
Fast AutoAugment

Neural Information Processing Systems

Data augmentation is an essential technique for improving the generalization ability of deep learning models. Recently, AutoAugment \cite{cubuk2018autoaugment} has been proposed as an algorithm to automatically search for augmentation policies from a dataset, and it has significantly enhanced performance on many image recognition tasks. However, its search method requires thousands of GPU hours even for a relatively small dataset. In this paper, we propose an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching. In comparison to AutoAugment, the proposed algorithm speeds up the search time by orders of magnitude while achieving comparable performance on image recognition tasks with various models and datasets, including CIFAR-10, CIFAR-100, SVHN, and ImageNet.
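The abstract only names the density-matching idea, so a minimal toy sketch may help: the key trick is that a model trained once on un-augmented data can score every candidate policy by its accuracy on an *augmented* held-out split, avoiding the per-candidate retraining that makes AutoAugment expensive. The function `search_policy` and its arguments below are hypothetical illustrations, not the paper's actual implementation (which uses k-fold splits and Bayesian optimization rather than exhaustive scoring).

```python
def search_policy(model, valid_set, candidate_policies):
    """Return the candidate augmentation policy under which the
    fixed, pre-trained model scores highest on the augmented
    validation split -- a stand-in for the density-matching
    objective, with no retraining per candidate.

    model:              callable mapping an input to a predicted label
    valid_set:          list of (input, label) pairs
    candidate_policies: callables mapping an input to an augmented input
    """
    best_policy, best_acc = None, -1.0
    for policy in candidate_policies:
        # Augment the held-out data with this candidate policy.
        augmented = [(policy(x), y) for x, y in valid_set]
        # Score the candidate by the fixed model's accuracy on it.
        acc = sum(model(x) == y for x, y in augmented) / len(augmented)
        if acc > best_acc:
            best_policy, best_acc = policy, acc
    return best_policy
```

The design point this illustrates: the model is trained once and treated as a fixed density estimator, so the search cost scales only with policy evaluation, not with training runs.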



Data Augmentation For Small Object using Fast AutoAugment

Yoon, DaeEun, Kim, Semin, Yoo, SangWook, Lee, Jongha

arXiv.org Artificial Intelligence

In recent years, there has been tremendous progress in object detection performance. However, despite these advances, detection performance for small objects remains significantly inferior to that for large objects. Detecting small objects is one of the most challenging and important problems in computer vision. To improve detection performance for small objects, we propose an optimal data augmentation method using Fast AutoAugment. Through our proposed method, we can quickly find augmentation policies that overcome the performance degradation observed when detecting small objects, and we achieve a 20% performance improvement on the DOTA dataset.
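The policies found by this family of methods share a common structure: each sub-policy is a short ordered list of (operation, probability, magnitude) triples, and each operation fires stochastically when the sub-policy is applied to an image. As a rough sketch of that application step (the function name, argument shapes, and `rng` hook below are illustrative assumptions, not the authors' code):

```python
import random

def apply_subpolicy(image, subpolicy, rng=random.random):
    """Apply an AutoAugment-style sub-policy to one image.

    subpolicy: list of (op, prob, magnitude) triples, where op is a
               callable taking (image, magnitude) and returning a new
               image. Each op is applied in order with probability
               prob; rng is injectable for deterministic testing.
    """
    for op, prob, magnitude in subpolicy:
        if rng() < prob:
            image = op(image, magnitude)
    return image
```

In practice the ops would be geometric and color transforms (shear, rotate, solarize, and so on) parameterized by magnitude; the search assigns the probabilities and magnitudes.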


Reviews: Fast AutoAugment

Neural Information Processing Systems

While I feel that the new random baselines significantly strengthen the paper's results on CIFAR-100, random baselines are not provided for CIFAR-10, SVHN, or ImageNet. I've updated my score from a 6 to a 7, based on the random baselines for CIFAR-100 and the authors' promise to clarify their evaluation measure in the final submission. However, Cubuk et al.'s original algorithm is extremely resource-intensive. The main contribution of this paper is an algorithm that can operate on the same search space and come up with data augmentation schemes orders of magnitude more efficiently. The most closely related work I'm aware of is Population Based Augmentation (ICML 2019), which tries to solve the same problem in a different way.


Reviews: Fast AutoAugment

Neural Information Processing Systems

This paper is concerned with automating the search for data augmentation transformations for image classification with DNN models. It does so in a way that avoids having to re-train (or fine-tune) the model for every transformation scored. This leads to a method which, compared to the previous SotA (AutoAugment), is very much faster, but is shown to provide results of similar quality. While both this work and AutoAugment use a carefully chosen search space, over which neither strongly outperforms random search, the dramatic reduction in resource needs over AutoAugment justifies its publication. However, the authors are asked to provide further results in the final version, in particular a more thorough comparison against random search baselines with the same advanced search space, also including random repetitions, in order to convince readers their method improves enough over random search to justify its added complexity.


Fast AutoAugment

Lim, Sungbin, Kim, Ildoo, Kim, Taesup, Kim, Chiheon, Kim, Sungwoong

Neural Information Processing Systems

Data augmentation is an essential technique for improving the generalization ability of deep learning models. Recently, AutoAugment \cite{cubuk2018autoaugment} has been proposed as an algorithm to automatically search for augmentation policies from a dataset, and it has significantly enhanced performance on many image recognition tasks. However, its search method requires thousands of GPU hours even for a relatively small dataset. In this paper, we propose an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching. In comparison to AutoAugment, the proposed algorithm speeds up the search time by orders of magnitude while achieving comparable performance on image recognition tasks with various models and datasets, including CIFAR-10, CIFAR-100, SVHN, and ImageNet.


Fast AutoAugment

Lim, Sungbin, Kim, Ildoo, Kim, Taesup, Kim, Chiheon, Kim, Sungwoong

arXiv.org Machine Learning

Data augmentation is an indispensable technique to improve generalization and also to deal with imbalanced datasets. Recently, AutoAugment (Cubuk et al., 2019) has been proposed to automatically search augmentation policies from a dataset, and it has significantly improved performance on many image recognition tasks. However, its search method requires thousands of GPU hours of training even in a reduced setting. In this paper, we propose the Fast AutoAugment algorithm, which learns augmentation policies using a more efficient search strategy based on density matching. In comparison to AutoAugment, the proposed algorithm speeds up the search time by orders of magnitude while maintaining comparable performance on image recognition tasks with various models and datasets, including CIFAR-10, CIFAR-100, and ImageNet.